7 research outputs found

    Advanced Multimodal Solutions for Information Presentation

    High-workload, fast-paced, and degraded sensory environments are the likeliest candidates to benefit from multimodal information presentation. For example, during EVA (Extra-Vehicular Activity) and telerobotic operations, the sensory restrictions associated with a space environment pose a major challenge to maintaining the situation awareness (SA) required for safe operations. Multimodal displays hold promise for enhancing situation awareness and task performance by utilizing different sensory modalities and maximizing their effectiveness based on appropriate interaction between modalities. During EVA, the visual and auditory channels are likely to be the most heavily utilized, with tasks such as monitoring the visual environment, attending to visual and auditory displays, and maintaining multichannel auditory communications. Previous studies have shown that, compared to unimodal displays (spatial auditory or 2D visual), bimodal presentation of information can improve operator performance during simulated extravehicular activity on planetary surfaces for tasks as diverse as orientation, localization, and docking, particularly when the visual environment is degraded or workload is increased. Tactile displays offer a third sensory channel that may both offload information processing effort and provide a means to capture attention when urgently required. For example, recent studies suggest that including tactile cues may increase orientation and alerting accuracy, improve task response time, and decrease workload, as well as provide self-orientation cues in microgravity on the ISS (International Space Station). An important overall issue is that context-dependent factors such as task complexity, sensory degradation, peripersonal vs. extrapersonal space operations, workload, experience level, and operator fatigue vary greatly in complex real-world environments, and it will be difficult to design a multimodal interface that performs well under all conditions. As a possible solution, adaptive systems have been proposed in which the information presented to the user changes as a function of task- and context-dependent factors. However, this presupposes that adequate methods for detecting and/or predicting such factors are developed. Further, research on adaptive systems for aviation suggests that they can sometimes serve to increase workload and reduce situational awareness. It will be critical to develop multimodal display guidelines that include consideration of smart systems that can select the best display method for a particular context or situation.

    The scope of the current work is an analysis of potential multimodal display technologies for long-duration missions, focusing in particular on their potential role in EVA activities. The review addresses multimodal (combined visual, auditory, and/or tactile) displays investigated by NASA, industry, and the DoD (Dept. of Defense). It also considers the need for adaptive information systems to accommodate a variety of operational contexts, such as crew status (e.g., fatigue, workload level) and task environment (e.g., EVA, habitat, rover, spacecraft). Current approaches to guidelines and best practices for combining modalities into the most effective information displays are also reviewed. Potential issues in developing interface guidelines for the Exploration Information System (EIS) are briefly considered.
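
    As a concrete illustration of the adaptive-system idea discussed above, the sketch below selects presentation channels from a handful of context factors. It is a hypothetical toy policy: the context fields, thresholds, and channel choices are assumptions made for illustration, not a design taken from the review.

```python
# Hypothetical sketch of context-adaptive modality selection. The context
# fields, thresholds, and policy below are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Context:
    workload: float   # 0 (idle) .. 1 (saturated), e.g. from a secondary-task measure
    visibility: float  # 0 (total obscuration) .. 1 (clear)
    urgent: bool       # message requires immediate attention


def select_modalities(ctx: Context) -> set[str]:
    """Choose presentation channels for an incoming message."""
    channels: set[str] = set()
    if ctx.visibility > 0.5 and ctx.workload < 0.7:
        channels.add("visual")       # visual channel usable and not saturated
    if ctx.workload >= 0.7 or ctx.visibility <= 0.5:
        channels.add("auditory")     # offload to audition when vision is degraded/busy
    if ctx.urgent:
        channels.add("tactile")      # tactile cue to capture attention when urgent
    return channels or {"auditory"}  # always present on at least one channel


print(select_modalities(Context(workload=0.8, visibility=0.3, urgent=True)))
# -> {'auditory', 'tactile'}
```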

    3D-Sonification for Obstacle Avoidance in Brownout Conditions

    Helicopter brownout is a phenomenon that occurs when making landing approaches in dusty environments, whereby sand or dust particles become swept up in the rotor outwash. Brownout is characterized by partial or total obscuration of the terrain, which degrades the visual cues necessary for hovering and safe landing. Furthermore, the motion of the dust cloud produced during brownout can lead to the pilot experiencing motion cue anomalies such as vection illusions. In this context, the stability and guidance control functions can be intermittently or continuously degraded, potentially leading to undetected surface hazards and obstacles as well as unnoticed drift. Safe and controlled landing in brownout can be achieved using an integrated presentation of LADAR and RADAR imagery and aircraft state symbology. However, though detected by the LADAR and displayed on the sensor image, small obstacles can be difficult to discern from the background, so that changes in obstacle elevation may go unnoticed. Moreover, the pilot workload associated with tracking the displayed symbology is often so high that the pilot cannot give sufficient attention to the LADAR/RADAR image. This paper documents a simulation evaluating the use of 3D auditory cueing for obstacle avoidance in brownout as a replacement for, or complement to, LADAR/RADAR imagery.
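
    As a rough illustration of 3D auditory cueing for obstacle avoidance, the sketch below maps an obstacle's position relative to the aircraft to spatial audio parameters plus a range-dependent beep rate. The geometry and the range-to-rate mapping are illustrative assumptions, not the implementation evaluated in the paper.

```python
# Illustrative mapping from obstacle position to 3D audio cue parameters
# (azimuth/elevation for spatialization, repetition rate for range urgency).
# The specific mapping is an assumption, not the paper's implementation.
import math


def obstacle_to_cue(dx: float, dy: float, dz: float) -> dict:
    """dx: metres ahead, dy: metres right, dz: metres up, relative to ownship.

    Assumes a nonzero range (the obstacle is not at the ownship position).
    """
    rng = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(dy, dx))     # + right of nose, - left
    elevation = math.degrees(math.asin(dz / rng))  # + above, - below
    # Closer obstacles beep faster, parking-sensor style (clamped to 1..10 Hz).
    beep_rate_hz = max(1.0, min(10.0, 50.0 / rng))
    return {"azimuth_deg": azimuth, "elevation_deg": elevation,
            "beep_rate_hz": beep_rate_hz}


print(obstacle_to_cue(dx=40.0, dy=-10.0, dz=5.0))
```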

    Using Published HRTFs with Slab3D: Metric-Based Database Selection and Phenomena Observed

    Presented at the 20th International Conference on Auditory Display (ICAD2014), June 22-25, 2014, New York, NY. In this paper, two publicly available head-related transfer function (HRTF) database collections are analyzed for use with the open-source slab3d rendering system. After conversion to the slab3d HRTF database format (SLH), a set of visualization tools and a five-step metric-based process are used to select a subset of databases for general use. The goal is to select a limited subset least likely to contain anomalous behavior or measurement error. The described set of open-source tools can be applied to any HRTF database converted to the slab3d format.
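
    The sketch below illustrates the general idea of metric-based screening with a single, simplified metric (mean left/right energy asymmetry of the measured impulse responses). The actual slab3d tools apply a five-step process with other metrics, so the metric, threshold, and data here are assumed stand-ins.

```python
# Simplified sketch of metric-based HRTF database screening. A real pipeline
# (like the slab3d tools described above) applies several metrics; here one
# illustrative metric flags databases whose left/right ear energies are
# implausibly asymmetric, a possible sign of measurement error.
import numpy as np


def lr_energy_asymmetry(hrirs: np.ndarray) -> float:
    """hrirs: array of shape (n_directions, 2, n_taps); channel 0=L, 1=R."""
    energy = np.sum(hrirs.astype(float) ** 2, axis=2)  # (n_directions, 2)
    left, right = energy[:, 0].mean(), energy[:, 1].mean()
    return abs(10.0 * np.log10(left / right))          # mean |L/R| in dB


def screen(databases: dict[str, np.ndarray], max_db: float = 3.0) -> list[str]:
    """Keep databases whose mean L/R asymmetry is under max_db decibels."""
    return [name for name, h in databases.items()
            if lr_energy_asymmetry(h) <= max_db]


# Example with synthetic data: one balanced set, one with an attenuated right ear.
rng = np.random.default_rng(0)
ok = rng.normal(size=(100, 2, 128))
bad = ok.copy()
bad[:, 1, :] *= 0.2                       # ~14 dB asymmetry -> screened out
print(screen({"dbA": ok, "dbB": bad}))    # -> ['dbA']
```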

    A trans-acting locus regulates an anti-viral expression network and type 1 diabetes risk

    Combined analyses of gene networks and DNA sequence variation can provide new insights into the aetiology of common diseases that may not be apparent from genome-wide association studies alone. Recent advances in rat genomics are facilitating systems-genetics approaches. Here we report the use of integrated genome-wide approaches across seven rat tissues to identify gene networks and the loci underlying their regulation. We defined an interferon regulatory factor 7 (IRF7)-driven inflammatory network (IDIN) enriched for viral response genes, which represents a molecular biomarker for macrophages and which was regulated in multiple tissues by a locus on rat chromosome 15q25. We show that Epstein-Barr virus induced gene 2 (Ebi2, also known as Gpr183), which lies at this locus and controls B lymphocyte migration, is expressed in macrophages and regulates the IDIN. The human orthologous locus on chromosome 13q32 controlled the human equivalent of the IDIN, which was conserved in monocytes. IDIN genes were more likely to associate with susceptibility to type 1 diabetes (T1D), a macrophage-associated autoimmune disease, than randomly selected immune response genes (P = 8.85 × 10^-6). The human locus controlling the IDIN was associated with the risk of T1D at single nucleotide polymorphism rs9585056 (P = 7.0 × 10^-10; odds ratio 1.15), which was one of five single nucleotide polymorphisms in this region associated with EBI2 (GPR183) expression. These data implicate IRF7 network genes and their regulatory locus in the pathogenesis of T1D.
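
    The reported enrichment comparison is the kind of result a permutation test yields: the overlap between the network and disease-associated genes is compared against the overlaps obtained for randomly drawn gene sets of the same size. The sketch below illustrates that logic on synthetic placeholder gene sets, not the study's data or its exact statistical procedure.

```python
# Illustrative permutation test of the kind used to ask whether a gene network
# (e.g. the IDIN) overlaps disease-associated genes more than random gene sets
# of the same size. All gene sets here are synthetic placeholders.
import random


def enrichment_p(network: set, disease: set, universe: list,
                 n_perm: int = 10_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    observed = len(network & disease)
    k = len(network)
    hits = sum(len(set(rng.sample(universe, k)) & disease) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)   # permutation P-value with pseudocount


universe = [f"gene{i}" for i in range(2000)]
disease = set(universe[:100])                           # 5% of genes "associated"
network = set(universe[:30]) | set(universe[100:170])   # 30/100 genes overlap
print(enrichment_p(network, disease, universe))         # small P -> enrichment
```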

    Spatial Auditory Displays: Substitution and Complementarity to Visual Displays

    Presented at the 20th International Conference on Auditory Display (ICAD2014), June 22-25, 2014, New York, NY. The primary goal of this research was to compare performance in the localization of stationary targets during a simulated extra-vehicular exploration of a planetary surface. Three different types of displays were tested for aiding orientation and localization: a 3D spatial auditory display, a 2D North-up visual map, and the combination of the two in a bimodal display. Localization performance was compared under four different environmental conditions combining high and low levels of visibility and ambiguity. In a separate experiment using a similar protocol, the impact of visual workload on performance was also investigated, contrasting high (dual-task paradigm) and low workload (single orientation task). A synergistic presentation of the visual and auditory information (bimodal display) led to a significant improvement in performance (higher percent correct orientation and localization, shorter decision and localization times) compared to either unimodal condition, particularly when the visual environmental conditions were degraded. Preliminary data using the dual-task paradigm suggest that performance with displays utilizing auditory cues was less affected by the extra demands of additional visual workload than performance with a visual-only display.

    The interaction of vision and audition in two-dimensional space

    Using a mouse-driven visual pointer, 10 participants made repeated open-loop egocentric localizations of memorized visual, auditory, and combined visual-auditory targets projected randomly across the two-dimensional (2D) frontal field. The results are reported in terms of variable error, constant error, and local distortion. The results confirmed that auditory and visual maps of egocentric space differ in their precision (variable error) and accuracy (constant error), both from one another and as a function of eccentricity and direction within a given modality. These differences were used, in turn, to make predictions about the precision and accuracy with which spatially and temporally congruent bimodal visual-auditory targets are localized. Overall, the improvement in precision for bimodal relative to the best unimodal target revealed the presence of optimal integration well predicted by the Maximum Likelihood Estimation (MLE) model. Conversely, the hypothesis that accuracy in localizing the bimodal visual-auditory targets would represent a compromise between auditory and visual performance in favor of the most precise modality was rejected. Instead, the bimodal accuracy was found to be equivalent to or to exceed that of the best unimodal condition. Finally, we describe how the different types of errors could be used to identify properties of the internal representations and coordinate transformations within the central nervous system (CNS). The results provide some insight into the structure of the underlying sensorimotor processes employed by the brain and confirm the usefulness of capitalizing on naturally occurring differences between vision and audition to better understand their interaction and their contribution to multimodal perception.
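
    The MLE model referenced above makes a quantitative prediction: each cue is weighted by its reliability (inverse variance), so the predicted bimodal variance falls below the best unimodal variance. The sketch below computes these standard predictions; the example standard deviations are made up for illustration.

```python
# Standard MLE (minimum-variance) cue-combination predictions: weight each cue
# by its reliability (1/variance); the combined variance is then
# sigma_va^2 = sigma_v^2 * sigma_a^2 / (sigma_v^2 + sigma_a^2).
def mle_combine(sigma_v: float, sigma_a: float) -> tuple[float, float, float]:
    """Return (visual weight, auditory weight, predicted bimodal sigma)."""
    rel_v, rel_a = 1.0 / sigma_v**2, 1.0 / sigma_a**2  # reliabilities
    w_v = rel_v / (rel_v + rel_a)                      # visual weight
    sigma_va = (1.0 / (rel_v + rel_a)) ** 0.5          # combined s.d.
    return w_v, 1.0 - w_v, sigma_va


w_v, w_a, s = mle_combine(sigma_v=1.0, sigma_a=3.0)    # vision more precise
print(f"w_v={w_v:.2f}, w_a={w_a:.2f}, bimodal sigma={s:.2f}")
# -> w_v=0.90, w_a=0.10, bimodal sigma=0.95 (< 1.0, the best unimodal sigma)
```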

    Design and Evaluation of a Constraint-Based Head-Up Display for Helicopter Obstacle Avoidance During Forward Flight

    This paper aims to reveal the effect of different display design principles in the helicopter domain. Two different obstacle avoidance support displays are evaluated during low-altitude, forward helicopter flight: a baseline Head-Up Display (HUD) is complemented either by a conventional advisory display or by a constraint-based display inspired by Ecological Interface Design, which has only sparsely been applied in the helicopter domain. It is hypothesized that the advisory display reduces workload, increases situation awareness, and improves performance measures in nominal obstacle avoidance situations, while the constraint-based display increases the resilience of the pilot-vehicle system towards unexpected, off-nominal situations. Twelve helicopter pilots with varying flight experience participated in an experiment in the SIMONA Research Simulator at Delft University of Technology. Contrary to expectations, the experiment revealed no significant effects of the displays on any of the dependent measures. However, there was a trend of decreasing pilot workload and increasing situation awareness when employing either of the support displays, compared to the baseline HUD. Pilots preferred the advisory display in nominal situations and the constraint-based display in off-nominal situations, reproducing similar findings from research in the fixed-wing domain. The relatively short time frame and monotony of the control task, an already cue-rich baseline HUD condition, and the similarity between the displays possibly prevented larger differences between conditions from emerging. Future research will analyze the obstacle avoidance trajectories from this experiment, possibly revealing changes in control strategy caused by the displays even when the lumped performance measures are similar. A follow-up experiment will focus on a longer task time frame, more variable situations, and a truly ecological display to investigate the effect of applying Ecological Interface Design and different automation systems in the helicopter domain.